Analyzing the Amazon Mechanical Turk marketplace
Similar Articles
The Language Demographics of Amazon Mechanical Turk
We present a large-scale study of the languages spoken by bilingual workers on Mechanical Turk (MTurk). We establish a methodology for determining the language skills of anonymous crowd workers that is more robust than simple surveying. We validate workers' self-reported language skill claims by measuring their ability to correctly translate words, and by geolocating workers to see if they resid...
Using Amazon Mechanical Turk for linguistic research
Amazon's Mechanical Turk service makes linguistic experimentation quick, easy, and inexpensive. However, researchers have not been certain about its reliability. In a series of experiments, this paper compares data collected via Mechanical Turk to those obtained using more traditional methods. One set of experiments measured the predictability of words in sentences using the Cloze sentence compl...
Clustering dictionary definitions using Amazon Mechanical Turk
Vocabulary tutors need word sense disambiguation (WSD) in order to provide exercises and assessments that match the sense of the words being taught. Using expert annotators to build a WSD training set for all the supported words would be too expensive. Crowdsourcing that task seems to be a good solution. However, a first required step is to define the possible sense labels to assign to word oc...
Running experiments on Amazon Mechanical Turk
Although Mechanical Turk has recently become popular among social scientists as a source of experimental data, doubts may linger about the quality of data provided by subjects recruited from online labor markets. We address these potential concerns by presenting new demographic data about the Mechanical Turk subject population, reviewing the strengths of Mechanical Turk relative to other online...
Active Learning with Amazon Mechanical Turk
Supervised classification needs large amounts of annotated training data, which is expensive to create. Two approaches that reduce the cost of annotation are active learning and crowdsourcing. However, these two approaches have not been combined successfully to date. We evaluate the utility of active learning in crowdsourcing on two tasks, named entity recognition and sentiment detection, and sho...
Journal
Journal title: XRDS: Crossroads, The ACM Magazine for Students
Year: 2010
ISSN: 1528-4972,1528-4980
DOI: 10.1145/1869086.1869094